Social behavior




Why do cats lick you? An expert explains.

Popular Science

Why do cats lick you? Grooming is only one way cats say, "I love you." Some cats shower their favorite humans with sandpaper kisses. If you've ever been around a cat, you know they can get the sudden urge to groom themselves at just about any moment: one minute everything seems lovely and content, the next they lose all interest in you and start licking their butt. Other cats can't be bothered and won't ever groom or lick their human friends, or other kitty friends for that matter. So, why do some cats lick their owners? Are they trying to clean you, too? We asked an animal behaviorist and cat expert to help us sort out exactly what is going on when your cat licks you. For a mother cat, grooming is an important part of child rearing. When a mama cat licks her kittens, it serves two important purposes: keeping her kittens clean and promoting social bonds, Kristyn Vitale, an animal behaviorist at Maueyes Cat Science and Education, tells Popular Science. On the one hand, "mother cats are going to groom their kittens to help keep them clean and healthy," says Vitale. Kittens can be especially susceptible to diseases, and "anybody who's raised young kittens knows how dirty they can get, and a mother cat is not going to obviously bathe their kitten in a tub."



Human Behavior Atlas: Benchmarking Unified Psychological and Social Behavior Understanding

Ong, Keane, Dai, Wei, Li, Carol, Feng, Dewei, Li, Hengzhi, Wu, Jingyao, Cheong, Jiaee, Mao, Rui, Mengaldo, Gianmarco, Cambria, Erik, Liang, Paul Pu

arXiv.org Artificial Intelligence

Using intelligent systems to perceive psychological and social behaviors, that is, the underlying affective, cognitive, and pathological states that are manifested through observable behaviors and social interactions, remains a challenge due to their complex, multifaceted, and personalized nature. Existing work tackling these dimensions through specialized datasets and single-task systems often misses opportunities for scalability, cross-task transfer, and broader generalization. To address this gap, we curate Human Behavior Atlas, a unified benchmark of diverse behavioral tasks designed to support the development of unified models for understanding psychological and social behaviors. Human Behavior Atlas comprises over 100,000 samples spanning text, audio, and visual modalities, covering tasks on affective states, cognitive states, pathologies, and social processes. Our unification efforts can reduce redundancy and cost, enable training to scale efficiently across tasks, and enhance generalization of behavioral features across domains. On Human Behavior Atlas, we train three models: OmniSapiens-7B SFT, OmniSapiens-7B BAM, and OmniSapiens-7B RL. We show that training on Human Behavior Atlas enables models to consistently outperform existing multimodal LLMs across diverse behavioral tasks. Pretraining on Human Behavior Atlas also improves transfer to novel behavioral datasets, with the targeted use of behavioral descriptors yielding meaningful performance gains.



Towards Simulating Social Influence Dynamics with LLM-based Multi-agents

Lin, Hsien-Tsung, Huang, Pei-Cing, Ku, Chan-Tung, Hsu, Chan, Shieh, Pei-Xuan, Kang, Yihuang

arXiv.org Artificial Intelligence

Recent advancements in Large Language Models offer promising capabilities to simulate complex human social interactions. We investigate whether LLM-based multi-agent simulations can reproduce core human social dynamics observed in online forums. We evaluate conformity dynamics, group polarization, and fragmentation across different model scales and reasoning capabilities using a structured simulation framework. Our findings indicate that smaller models exhibit higher conformity rates, whereas models optimized for reasoning are more resistant to social influence. Recent advancements in machine learning, particularly in Large Language Models (LLMs), have substantially enhanced the capability of machines to emulate human language patterns, cognitive processes, and interactive behaviors [1].
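The conformity dynamics the abstract describes can be illustrated with a minimal rule-based sketch. This is not the paper's framework: the opinion representation, update rule, and `conformity_rate` parameter are hypothetical stand-ins for the behavior of LLM agents of different scales.

```python
import random

def simulate_conformity(n_agents=50, steps=100, conformity_rate=0.6, seed=0):
    """Toy model: each agent holds an opinion in [-1, 1] and, with
    probability `conformity_rate`, shifts halfway toward the group mean.
    Returns the final opinion variance (low variance = high conformity)."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n_agents)]
    for _ in range(steps):
        mean = sum(opinions) / n_agents
        for i in range(n_agents):
            if rng.random() < conformity_rate:
                # conforming agents move toward the current consensus
                opinions[i] += 0.5 * (mean - opinions[i])
    mean = sum(opinions) / n_agents
    return sum((o - mean) ** 2 for o in opinions) / n_agents

# Agents that conform more often end up with less opinion spread,
# mirroring the finding that more conformist agents converge faster.
low_conformity_spread = simulate_conformity(conformity_rate=0.1)
high_conformity_spread = simulate_conformity(conformity_rate=0.9)
```

In the paper's setting the update step would be an LLM agent reading forum posts and revising its stance; here a numeric averaging rule plays that role so the dynamics are easy to inspect.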


Can Language Models Understand Social Behavior in Clinical Conversations?

Bedmutha, Manas Satish, Chen, Feng, Hartzler, Andrea, Cohen, Trevor, Weibel, Nadir

arXiv.org Artificial Intelligence

Effective communication between providers and their patients influences health and care outcomes. The effectiveness of such conversations has been linked not only to the exchange of clinical information, but also to a range of interpersonal behaviors, commonly referred to as social signals, which are often conveyed through non-verbal cues and shape the quality of the patient-provider relationship. Recent advances in large language models (LLMs) have demonstrated an increasing ability to infer emotional and social behaviors even when analyzing only textual information. As automation increases in clinical settings, such as in the transcription of patient-provider conversations, there is growing potential for LLMs to automatically analyze and extract social behaviors from these interactions. To explore the foundational capabilities of LLMs in tracking social signals in clinical dialogue, we designed task-specific prompts and evaluated model performance across multiple architectures and prompting styles using a highly imbalanced, annotated dataset spanning 20 distinct social signals such as provider dominance and patient warmth. We present the first system capable of tracking all 20 coded signals, and uncover patterns in LLM behavior. Further analysis of model configurations and clinical context provides insights for enhancing LLM performance on social signal processing tasks in healthcare settings.
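The task-specific prompting described above can be sketched as a simple template: one prompt per social signal, applied to a transcript segment. The template below is illustrative only, not the paper's actual prompts; the label set and wording are assumptions.

```python
def build_signal_prompt(signal, transcript):
    """Assemble a prompt asking an LLM to rate one social signal
    (e.g. 'provider dominance') from a patient-provider transcript.
    Template and label set are hypothetical, for illustration."""
    return (
        "You are analyzing a clinical conversation transcript.\n"
        f"Social signal to assess: {signal}\n"
        "Answer with exactly one label: high, low, or neutral.\n\n"
        f"Transcript:\n{transcript}\n\n"
        "Label:"
    )

prompt = build_signal_prompt(
    "provider dominance",
    "Provider: Let's move on. Patient: But I had a question...",
)
```

With 20 coded signals, the same transcript would be queried once per signal, and the single-word label constraint keeps the model's output easy to score against the annotations.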


AgentSociety: Large-Scale Simulation of LLM-Driven Generative Agents Advances Understanding of Human Behaviors and Society

Piao, Jinghua, Yan, Yuwei, Zhang, Jun, Li, Nian, Yan, Junbo, Lan, Xiaochong, Lu, Zhihong, Zheng, Zhiheng, Wang, Jing Yi, Zhou, Di, Gao, Chen, Xu, Fengli, Zhang, Fang, Rong, Ke, Su, Jun, Li, Yong

arXiv.org Artificial Intelligence

Understanding human behavior and society is a central focus in social sciences, with the rise of generative social science marking a significant paradigmatic shift. By leveraging bottom-up simulations, it replaces costly and logistically challenging traditional experiments with scalable, replicable, and systematic computational approaches for studying complex social dynamics. Recent advances in large language models (LLMs) have further transformed this research paradigm, enabling the creation of human-like generative social agents and realistic simulacra of society. In this paper, we propose AgentSociety, a large-scale social simulator that integrates LLM-driven agents, a realistic societal environment, and a powerful large-scale simulation engine. Based on the proposed simulator, we generate social lives for over 10k agents, simulating their 5 million interactions both among agents and between agents and their environment. Furthermore, we explore the potential of AgentSociety as a testbed for computational social experiments, focusing on four key social issues: polarization, the spread of inflammatory messages, the effects of universal basic income policies, and the impact of external shocks such as hurricanes. These four issues serve as valuable cases for assessing AgentSociety's support for typical research methods -- such as surveys, interviews, and interventions -- as well as for investigating the patterns, causes, and underlying mechanisms of social issues. The alignment between AgentSociety's outcomes and real-world experimental results not only demonstrates its ability to capture human behaviors and their underlying mechanisms, but also underscores its potential as an important platform for social scientists and policymakers.


Planning, Living and Judging: A Multi-agent LLM-based Framework for Cyclical Urban Planning

Ni, Hang, Wang, Yuzhi, Liu, Hao

arXiv.org Artificial Intelligence

Urban regeneration presents significant challenges within the context of urbanization, requiring adaptive approaches to tackle evolving needs. Leveraging advancements in large language models (LLMs), we propose Cyclical Urban Planning (CUP), a new paradigm that continuously generates, evaluates, and refines urban plans in a closed loop. Specifically, our multi-agent LLM-based framework consists of three key components: (1) Planning, where LLM agents generate and refine urban plans based on contextual data; (2) Living, where agents simulate the behaviors and interactions of residents, modeling life in the urban environment; and (3) Judging, which involves evaluating plan effectiveness and providing iterative feedback for improvement. The cyclical process enables a dynamic and responsive planning approach. Experiments on a real-world dataset demonstrate the effectiveness of our framework as a continuous and adaptive planning process.
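The Planning-Living-Judging cycle can be sketched as a closed feedback loop. The sketch below uses numeric stand-ins (a single "number of parks" decision and a fixed resident demand) in place of the paper's LLM agents; all names and values are hypothetical.

```python
def plan(n_parks, feedback):
    # Planning: revise the proposal using the previous cycle's feedback
    return max(0, n_parks + feedback)

def live(n_parks, demand=5):
    # Living: toy resident simulation -- satisfaction peaks when
    # supply matches demand (0 is best, more negative is worse)
    return -abs(demand - n_parks)

def judge(satisfaction, prev_satisfaction):
    # Judging: keep pushing in the same direction while satisfaction
    # improves; reverse once it worsens
    return 1 if satisfaction >= prev_satisfaction else -1

n_parks, feedback, prev = 0, 0, float("-inf")
for _ in range(10):  # closed-loop refinement over ten cycles
    n_parks = plan(n_parks, feedback)
    satisfaction = live(n_parks)
    feedback = judge(satisfaction, prev)
    prev = satisfaction
```

In CUP each of these three steps would be carried out by LLM agents over rich contextual data; the point of the sketch is only the loop structure, in which the Judging signal feeds the next Planning round until the plan settles near the residents' needs.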